

Search for: All records

Creators/Authors contains: "Lee, Adam"


  1. Explanations of AI agents' actions are considered an important factor in improving users' trust in the decisions made by autonomous AI systems. However, as these autonomous systems evolve from reactive (acting on user input) to proactive (acting without requiring user intervention), there is a need to explore how the explanations for these agents' actions should evolve. In this work, we explore the design of explanations, through participatory design methods, for a proactive auto-response messaging agent that can reduce perceived obligations and social pressure to respond quickly to incoming messages by providing unavailability-related context. We recruited 14 participants who worked in pairs during collaborative design sessions in which they reasoned about the agent's design and actions. We qualitatively analyzed the data collected through these sessions and found that participants' reasoning about agent actions led them to speculate heavily about its design. These speculations significantly influenced participants' desire for explanations and the controls they sought to inform the agent's behavior. Our findings indicate a need to transform users' speculations into accurate mental models of agent design. Further, since the agent acts as a mediator in human-human communication, its explanation design must also account for social norms. Finally, because users are experts in their own habits and behaviors, the agent can learn their preferences from them when justifying its actions.

     
    Free, publicly-accessible full text available September 11, 2024
  2. IoT devices like smart cameras and speakers provide convenience but can collect sensitive information within private spaces. While research has investigated users' comfort with information flows originating from these types of devices, little attention has been given to the role of the sensing hardware in shaping these sentiments. Given the proliferation of trusted execution environments (TEEs) across commodity- and server-class devices, we surveyed 1049 American adults using the Contextual Integrity framework to understand how the inclusion of cloud-based TEEs in IoT ecosystems may influence comfort with data collection and use. We find that cloud-based TEEs significantly increase user comfort across information flows. These increases are more pronounced for devices manufactured by smaller companies, showing that cloud-based TEEs can bridge the previously documented gulf in user trust between small and large companies. Sentiments around consent, bystander data, and indefinite retention are unaffected by the presence of TEEs, indicating the centrality of these norms.
    Free, publicly-accessible full text available July 1, 2024
  3. With the rising popularity of photo sharing on online social media, interpersonal privacy violations, where one person violates the privacy of another, have become an increasing concern. Although applying image obfuscations can be a useful tool for improving privacy when sharing photos, prior studies have found that these obfuscation techniques adversely affect viewers' satisfaction. On the other hand, ephemeral photos, popularized by apps such as Snapchat, allow viewers to see the entire photo, which then disappears shortly thereafter to protect privacy; however, people often use workarounds to save these photos before deletion. In this work, we study people's sharing preferences for two proposed 'temporal redaction' mechanisms, which combine ephemerality with redaction: viewers can see the entire image, yet the image is made safe for longer storage through a gradual or delayed application of redaction to its sensitive portions. We conducted an online experiment (N=385) to study people's sharing behaviors in different contexts and under different levels of assurance provided by the viewer's platform (e.g., guaranteeing temporal redactions are applied through the use of 'trusted hardware'). Our findings suggest that the proposed temporal redaction mechanisms are often preferred over existing methods. However, more effort is needed to convey the benefits of trusted hardware to users, as no significant differences were observed in attitudes towards 'trusted hardware' on viewers' devices.
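     The gradual variant described in this abstract is straightforward to picture in code. The following is a minimal, hypothetical sketch (not the authors' implementation), assuming a Pillow-based pipeline: it blurs a photo's sensitive region in proportion to the time elapsed since sharing. The region coordinates, schedule, and maximum blur radius are invented for illustration.

        # Sketch of a "gradual" temporal redaction: the sensitive region of a
        # shared photo is blurred more strongly as time passes, so the photo is
        # fully viewable at first but becomes safe for longer storage.
        from datetime import datetime, timedelta

        from PIL import Image, ImageFilter  # pip install Pillow

        def temporally_redact(photo: Image.Image,
                              region: tuple[int, int, int, int],
                              shared_at: datetime,
                              full_redaction_after: timedelta,
                              max_blur_radius: float = 25.0) -> Image.Image:
            """Blur `region` of `photo` in proportion to how much of
            `full_redaction_after` has elapsed since `shared_at`."""
            elapsed = datetime.now() - shared_at
            # Fraction of the redaction schedule elapsed, clamped to [0, 1].
            progress = min(max(elapsed / full_redaction_after, 0.0), 1.0)
            if progress == 0.0:
                return photo.copy()  # viewer still sees the unmodified photo
            redacted = photo.copy()
            sensitive = redacted.crop(region)
            sensitive = sensitive.filter(
                ImageFilter.GaussianBlur(radius=max_blur_radius * progress))
            redacted.paste(sensitive, region)
            return redacted

     A delayed variant would instead keep the photo untouched until a deadline and then apply full redaction at once. The 'trusted hardware' assurance studied in the paper concerns guaranteeing that such a schedule actually runs on the viewer's device, which this sketch does not attempt to model.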
  4. Smart voice assistants such as Amazon Alexa and Google Home are becoming increasingly pervasive in our everyday environments. Despite their benefits, their miniaturized, embedded cameras and microphones raise important privacy concerns related to surveillance and eavesdropping. Recent work on the privacy concerns of people in the vicinity of these devices has highlighted the need for 'tangible privacy', where control and feedback mechanisms provide a more assured sense of whether the camera or microphone is 'on' or 'off'. However, current designs of these devices lack adequate mechanisms to provide such assurances. To address this gap in the design of smart voice assistants, especially in the case of disabling microphones, we evaluate several designs that incorporate (or omit) tangible control and feedback mechanisms. By comparing people's perceptions of risk, trust, reliability, usability, and control for these designs in a between-subjects online experiment (N=261), we find that devices with tangible, built-in physical controls are perceived as more trustworthy and usable than those with non-tangible mechanisms. Our findings point toward an approach for tangible, assured privacy, especially in the context of embedded microphones.

     
  5. To facilitate the adoption of the cloud by organizations, Cryptographic Access Control (CAC) is the obvious solution for controlling data sharing among users while preventing partially trusted Cloud Service Providers (CSPs) from accessing sensitive data. Indeed, several CAC schemes have been proposed in the literature. Despite their differences, available solutions are based on a common set of entities (e.g., a data storage service or a proxy mediating users' access to encrypted data) that operate in different security domains (e.g., on-premise or at the CSP). However, most of these CAC schemes assume a fixed assignment of entities to domains; this has security and usability implications that are not made explicit and can make a CAC scheme inappropriate for scenarios with particular trust assumptions and requirements. For instance, running the proxy on the premises of the organization avoids vendor lock-in but may raise other security concerns (e.g., malicious insiders). To the best of our knowledge, no previous work considers how to select the best possible architecture (i.e., the assignment of entities to domains) for deploying a CAC scheme under the trust assumptions and requirements of a given scenario. In this article, we propose a methodology to assist administrators in exploring different architectures for the enforcement of CAC schemes in a given scenario. We do this by identifying the possible architectures underlying the CAC schemes available in the literature and formalizing them in simple set theory. This allows us to reduce the problem of selecting the most suitable architectures, satisfying a heterogeneous set of trust assumptions and requirements arising from the considered scenario, to a decidable Multi-objective Combinatorial Optimization Problem (MOCOP) for which state-of-the-art solvers can be invoked. Finally, we show how we use this MOCOP-solving capability to build a prototype tool that assists administrators in performing a preliminary "what-if" analysis to explore the trade-offs among the various architectures, and then uses available standards and tools (such as TOSCA and Cloudify) for automated deployment across multiple CSPs.
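     To make the reduction concrete, the following is a small, self-contained sketch in the spirit of the approach; the entities, domains, hard constraint, and objective scores are all invented for illustration, and the paper's actual set-theoretic formalization and solver differ. It enumerates assignments of CAC entities to security domains, discards assignments that violate a hard trust requirement, and keeps the Pareto-optimal architectures under two objectives.

        # Sketch of architecture selection as a tiny multi-objective
        # combinatorial optimization: enumerate entity-to-domain assignments,
        # filter by a hard constraint, keep the Pareto front.
        from itertools import product

        ENTITIES = ("proxy", "metadata_store", "data_store")  # hypothetical
        DOMAINS = ("on_premise", "csp")

        # Hypothetical per-assignment objective scores (lower is better):
        # (vendor lock-in risk, on-premise operational cost)
        SCORES = {
            ("proxy", "on_premise"): (0, 2), ("proxy", "csp"): (2, 0),
            ("metadata_store", "on_premise"): (0, 1), ("metadata_store", "csp"): (1, 0),
            ("data_store", "on_premise"): (0, 3), ("data_store", "csp"): (1, 0),
        }

        def satisfies_requirements(arch: dict) -> bool:
            # Example hard constraint: a partially trusted CSP must not host
            # the proxy and the data store together (it could then read data).
            return not (arch["proxy"] == "csp" and arch["data_store"] == "csp")

        def objectives(arch: dict) -> tuple[int, int]:
            lock_in = sum(SCORES[(e, d)][0] for e, d in arch.items())
            cost = sum(SCORES[(e, d)][1] for e, d in arch.items())
            return lock_in, cost

        def dominates(a: tuple, b: tuple) -> bool:
            # `a` Pareto-dominates `b`: no worse anywhere, better somewhere.
            return all(x <= y for x, y in zip(a, b)) and a != b

        candidates = [dict(zip(ENTITIES, assignment))
                      for assignment in product(DOMAINS, repeat=len(ENTITIES))
                      if satisfies_requirements(dict(zip(ENTITIES, assignment)))]

        pareto = [a for a in candidates
                  if not any(dominates(objectives(b), objectives(a))
                             for b in candidates if b is not a)]

        for arch in pareto:
            print(arch, objectives(arch))

     A realistic instance would hand the formal model to an off-the-shelf multi-objective solver, as the paper does, rather than brute-force the search; here the space is small enough (2^3 assignments) to enumerate exhaustively for illustration.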